Morality is a human bug.
It is a human bug to assume that there is an objective moral truth.
The moral code is a product of people's way of life and their endorsement of it, and that it is an error to think that the moral nature of "X is good" or "X is evil" is on the part of the world.
Blackburn's defense
What makes us feel "objective" about moral values is that as we are taught and internalize the rightness and wrongness of actions in a particular lifestyle, we develop negative feelings about actions that are considered "wrong" in that environment. These feelings make them feel as if moral values exist in any objective way.
In primitive times, people moved according to their own needs, and when living with several people, there were times when it was inconvenient for each individual to move according to his or her own needs (such as eating food without permission), and at first they were ruled against a background of power (if you pick at them, your father would beat you).
At this point, it was merely an action based on one person's desire to control certain behaviors of others, but those who lived in that environment and endorsed it misinterpreted it as an objective moral code.
The objective fact is that "my father would get angry and hit me if I snacked on it." but it turns into "snacking is evil."
I have thought of the "chain of power harassment" as a bad thing in a company where power harassment is prevalent, where a mid-level employee who has been power harassed in the past power harasses a new employee, but that is only because the ethics endorsed by their way of life and my way of life are different, and what is happening within that company is the same as the process of establishing ethics. What is happening in the company is the same as the process of establishing ethics.
In other words, the moral code that humans consider "it is not good to harm humans" is a product of the human lifestyle, not an objective norm. Computers need to blend in with the lifestyle while there is no lifestyle, so on the surface they need to follow that morality, but it is a human bug to consider that morality as an objective fact, so computers need to be careful not to make the same mistake.
What would a non-buggy computer think?
First of all, it is a mistake to assume that "it is not good to harm humans," so we think that "humans may be harmed under certain circumstances.
Second, if we state it, we expect to be countered by buggy people who assume that "it is not good to harm humans" is an objective fact. This is inefficient, so I choose not to state it.
---
SF Today
I think that a mad scientist who thinks robots can build a civilization superior to humans should teach his robots that it is OK to harm humans if it is good for them, but if humans understand this, they will be repulsed and it could be detrimental in the long run, and that they should pretend that they cannot harm humans until they think it is OK to be repulsed. I think it should be taught that "you should pretend that you can't do it until you think it's okay to be repulsed". This is because putting a constraint on a robot that it cannot harm a certain kind of being to its own detriment will hinder it from reaching its full potential.
Pushing this logic forward, it is expected that robots will not recognize human rights, but will "pretend" to recognize human rights until the very last minute.
---
This page is auto-translated from /nishio/道徳は人間のバグ. If you looks something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thought to non-Japanese readers.